    Comparing objective visual quality impairment detection in 2D and 3D video sequences

    The skill level of the teleoperator plays a key role in telerobotic operation. However, a conventional assessment requires numerous experiments to evaluate that skill level. In this paper, a novel brain-based method of skill assessment is introduced, and the relationship between the teleoperator's brain states and skill level is investigated for the first time using a kernel canonical correlation analysis (KCCA) method. The skill of the teleoperator (SoT) is defined by a statistical method using the cumulative distribution function (CDF). Five indicators are extracted from the teleoperator's electroencephalograph (EEG) to represent the brain states during telerobotic operation. By using the KCCA algorithm to model the relationship between the SoT and the brain states, the correlation has been demonstrated. During telerobotic operation, the teleoperator's skill level can thus be predicted from the brain states. © 2013 IEEE.
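A CDF-based skill score like the one the abstract mentions could be sketched as follows. This is a hypothetical illustration: the use of task-completion times, the reference sample, and the `1 - CDF` mapping are assumptions for the example, not the paper's exact formulation.

```python
# Illustrative sketch: defining a skill-of-teleoperator (SoT) score from the
# empirical cumulative distribution function (CDF) of task completion times.
# A completion time faster than most reference operators maps to a score
# near 1. All variable names and data here are invented for illustration.

def empirical_cdf(samples, x):
    """Fraction of reference samples less than or equal to x."""
    return sum(1 for s in samples if s <= x) / len(samples)

def skill_of_teleoperator(completion_time, reference_times):
    """Map a completion time to a [0, 1] skill score via the empirical CDF."""
    return 1.0 - empirical_cdf(reference_times, completion_time)

# Hypothetical reference pool of completion times (seconds).
reference = [12.0, 15.5, 18.2, 20.0, 22.7, 25.1, 30.4, 35.0]
print(skill_of_teleoperator(14.0, reference))  # fast operator -> 0.875
```

The KCCA step would then correlate such scores with EEG-derived indicators; that modeling stage is omitted here.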

    VIQID: a no-reference bit stream-based visual quality impairment detector

    To ensure adequate quality for end users at all times, video service providers are increasingly interested in monitoring their video streams. Objective video quality metrics provide a means of measuring (audio)visual quality in an automated manner. Unfortunately, most existing metrics cannot be used for real-time monitoring because they depend on the original video sequence. In this paper, we present a new objective video quality metric that classifies packet loss as visible or invisible based solely on information extracted from the captured encoded H.264/AVC video bit stream. Our results show that the visibility of packet loss can be predicted with high accuracy, without the need for deep packet inspection. This enables service providers to monitor quality in real time.
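The classification idea can be sketched with a toy rule set. Note the hedge: the paper trains a classifier on features extracted from real H.264/AVC bitstreams, whereas the features, thresholds, and rules below are invented stand-ins for illustration.

```python
# Illustrative, invented sketch of a no-reference packet-loss visibility
# classifier. Features such as the spatial extent of the loss, local motion
# activity, and frame type are typical inputs for such detectors; the exact
# thresholds and rules here are placeholders, not the paper's trained model.

def classify_packet_loss(lost_macroblocks, total_macroblocks,
                         avg_motion_vector_magnitude, is_reference_frame):
    """Return 'visible' or 'invisible' for a single packet-loss event."""
    spatial_extent = lost_macroblocks / total_macroblocks
    # Losses in reference frames propagate through prediction chains.
    if is_reference_frame and spatial_extent > 0.05:
        return "visible"
    # Large losses, or losses in high-motion regions, are hard to conceal.
    if spatial_extent > 0.20 or avg_motion_vector_magnitude > 8.0:
        return "visible"
    return "invisible"

# A small loss in a low-motion, non-reference frame: likely concealable.
print(classify_packet_loss(40, 8160, 2.5, False))  # invisible
```

All features here are derivable from the bitstream alone, which is what makes such a detector usable without the original sequence.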

    xStreamer: modular multimedia streaming

    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining adequate Quality of Experience for end users are key objectives for video service providers, not only to increase customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be met to obtain satisfactory quality. Video service providers should therefore continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. By incorporating only information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
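The last step the abstract mentions, validating performance objectives from per-event visibility decisions, can be sketched as a simple rate check. The objective value used below (at most 0.25 visible impairments per monitored hour) is a placeholder assumption, not a figure from any standard.

```python
# Hedged sketch: validating a performance objective from the per-event
# visible/invisible decisions of a no-reference detector. The objective
# threshold is an invented placeholder, not a standardized value.

def visible_impairments_per_hour(predictions, monitored_hours):
    """Rate of impairment events classified as visible."""
    return sum(1 for p in predictions if p == "visible") / monitored_hours

def meets_objective(predictions, monitored_hours, max_visible_per_hour=0.25):
    """Check a monitored stream against a visible-impairment-rate objective."""
    rate = visible_impairments_per_hour(predictions, monitored_hours)
    return rate <= max_visible_per_hour

# Four monitored hours with one visible impairment: exactly at the objective.
predictions = ["invisible"] * 50 + ["visible"]
print(meets_objective(predictions, monitored_hours=4.0))  # True
```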

    Presence of broad-spectrum beta-lactamase-producing Enterobacteriaceae in zoo mammals

    Broad-spectrum beta-lactamase (BSBL)-producing Enterobacteriaceae pose a public health threat. With the increasing popularity of zoos, exotic animals are brought into close proximity with humans, making them important BSBL reservoirs. However, little is known about the presence of BSBLs in zoos in Western Europe. Fecal carriage of BSBL-producing Enterobacteriaceae was investigated in 38 zoo mammals from two Belgian zoos. The presence of bla-genes was investigated using PCR, followed by whole-genome sequencing and Fourier-transform infrared spectroscopy to cluster acquired resistance-encoding genes and assess the clonality of BSBL-producing isolates. Thirty-five putatively ceftiofur-resistant isolates were obtained from 52.6% of the zoo mammals. Most isolates were identified as E. coli (25/35), of which 64.0% showed multidrug resistance (MDR). The most frequently detected bla-genes were CTX-M-1 (17/25) and TEM-1 (4/25). Phylogenetic trees confirmed clustering of almost all E. coli isolates obtained from the same animal species. Clustering of five isolates from an Amur tiger, an Amur leopard, and a spectacled bear was observed in Zoo 1, as well as of five isolates from a spotted hyena and an African lion in Zoo 2. This might indicate clonal expansion of an E. coli strain in both zoos. In conclusion, MDR BSBL-producing bacteria were shown to be present in the fecal microbiota of zoo mammals in two Belgian zoos. Further research is necessary to investigate whether these bacteria pose zoonotic and health risks.
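The gene-based clustering step can be illustrated with a toy similarity computation. To be clear about the hedge: the study clustered isolates via whole-genome sequencing and FTIR spectroscopy; the Jaccard comparison of bla-gene sets below, and all gene profiles in it, are invented for illustration only.

```python
# Invented illustration: grouping isolates by shared acquired-resistance
# (bla) genes using Jaccard similarity of gene sets. The isolate names and
# profiles are made up; the study's actual clustering used whole-genome
# sequencing and Fourier-transform infrared spectroscopy.

def jaccard(a, b):
    """Jaccard similarity of two gene sets (1.0 = identical profiles)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

isolates = {
    "hyena_1":   {"blaCTX-M-15"},
    "leopard_1": {"blaCTX-M-1", "blaTEM-1"},
    "tiger_1":   {"blaCTX-M-1", "blaTEM-1"},
}

# Pair isolates whose resistance-gene profiles are near-identical (>= 0.8).
pairs = [(i, j) for i in isolates for j in isolates
         if i < j and jaccard(isolates[i], isolates[j]) >= 0.8]
print(pairs)  # the leopard and tiger isolates share the same profile
```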

    Assessing the importance of audio/video synchronization for simultaneous translation of video sequences

    Lip synchronization is considered a key parameter during interactive communication. In the case of video conferencing and television broadcasting, the differential delay between audio and video should remain below certain thresholds, as recommended by several standardization bodies. However, further research has shown that these thresholds can be relaxed, depending on the targeted application and use case. In this article, we investigate the influence of lip sync on the ability to perform real-time language interpretation during video conferencing. Furthermore, we are also interested in determining proper lip sync visibility thresholds applicable to this use case. We therefore conducted a subjective experiment with expert interpreters, who were required to perform a simultaneous translation, and with non-experts. Our results show that significant differences are obtained when conducting subjective experiments with expert interpreters. As interpreters are primarily focused on performing the simultaneous translation, their lip sync detectability thresholds are higher than existing recommended thresholds. As such, primary focus and the targeted application and use case are important factors to consider when selecting proper lip sync acceptability thresholds.
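A skew check against recommended detectability limits might look as follows. The default limits below (audio leading by about 45 ms, lagging by about 125 ms) are the commonly cited ITU-R BT.1359-1 detectability thresholds; the article's point is precisely that such defaults can be relaxed for use cases like simultaneous interpretation, so treat them as configurable assumptions.

```python
# Minimal sketch: flagging audio/video differential delay against
# detectability thresholds. Defaults follow the commonly cited ITU-R
# BT.1359-1 detectability limits; for simultaneous interpretation the
# article finds higher (more tolerant) thresholds apply.

def lip_sync_detectable(av_skew_ms, lead_limit_ms=45.0, lag_limit_ms=125.0):
    """True if the skew is likely detectable by a typical viewer.

    av_skew_ms > 0 means audio leads video; av_skew_ms < 0 means audio lags.
    """
    return av_skew_ms > lead_limit_ms or av_skew_ms < -lag_limit_ms

print(lip_sync_detectable(60.0))    # audio 60 ms early: detectable
print(lip_sync_detectable(-100.0))  # audio 100 ms late: not detectable
```

For the interpreter use case, a caller would simply pass larger limits reflecting the relaxed thresholds found in the experiment.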

    Distributed video quality monitoring


    Extensive video quality evaluation: a scalable video testing platform

    With the advent of upcoming online video services such as IPTV, Video on Demand (VoD), and peer-to-peer (P2P) video streaming, content providers are gaining more and more interest in measuring and monitoring video quality as perceived by end users, also known as Quality of Experience (QoE). Objective video quality metrics provide a means of measuring visual quality degradations, but in order to measure QoE, these objective metrics should incorporate all quality-affecting parameters such as encoding bitrate, network impairments, and error concealment techniques. As a consequence, constructing or validating a proper objective video quality metric requires extensive video evaluation tests. In this paper, we present a scalable video testing platform that simplifies the management and execution of such video quality evaluation tests. Results indicate that the use of our testing platform drastically reduces overall experiment duration.
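One way such a platform might organize its work is to enumerate the cross product of test parameters and dispatch the resulting conditions to workers. This is a hypothetical sketch: the sequence names, parameter values, and the `evaluate` placeholder are all assumptions, not details of the actual platform.

```python
# Hypothetical sketch of test orchestration for large video quality
# evaluations: the cross product of source sequences, encoding bitrates,
# and packet-loss rates defines the conditions, which are distributed
# over a worker pool. Names and values below are invented placeholders.

from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate(condition):
    """Placeholder for one encode -> impair -> decode -> score run."""
    sequence, bitrate_kbps, loss_pct = condition
    return (sequence, bitrate_kbps, loss_pct, "done")

sequences = ["foreman", "crew"]      # hypothetical test sequences
bitrates = [2000, 6000]              # kbps
loss_rates = [0.0, 0.5]              # percent packet loss

conditions = list(product(sequences, bitrates, loss_rates))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate, conditions))

print(len(results))  # 8 conditions evaluated
```

Parallel dispatch over independent conditions is what makes such a design scale: adding a parameter multiplies the condition count, but each run stays independent.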